Generalised Bayesian Filtering via Sequential Monte Carlo
We introduce a framework for inference in general state-space hidden Markov models (HMMs) under likelihood misspecification. In particular, we leverage the loss-theoretic perspective of Generalized Bayesian Inference (GBI) to define generalised filtering recursions in HMMs that can tackle the problem of inference under model misspecification. In doing so, we arrive at principled procedures for inference that are robust to observation contamination by utilising the $\beta$-divergence. The proposed framework is operationalised via sequential Monte Carlo (SMC) methods, where standard particle methods, and their associated convergence results, are readily adapted to the new setting. We demonstrate our approach on object tracking and Gaussian process regression problems, and observe improved performance over standard filtering algorithms.
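To make the idea concrete, the sketch below shows how a bootstrap particle filter's weight update can be swapped for a $\beta$-divergence loss, following the general GBI recipe. This is a minimal illustration, not the paper's implementation: the one-dimensional random-walk model, the Gaussian observation density, and all parameter values (`sigma_x`, `sigma_y`, `beta`) are assumptions chosen for the demo. For a Gaussian likelihood the $\beta$-loss has the closed form $-\tfrac{1}{\beta} f(y\mid x)^\beta + \tfrac{1}{1+\beta}\int f(z\mid x)^{1+\beta}\,dz$, with the integral available analytically.

```python
import numpy as np

def beta_loss_gaussian(y, mean, sigma, beta):
    """beta-divergence loss for a Gaussian observation density N(y; mean, sigma^2).

    loss = -(1/beta) f(y)^beta + (1/(1+beta)) * \int f(z)^{1+beta} dz,
    where the integral has the closed form (2*pi*sigma^2)^(-beta/2) / sqrt(1+beta).
    """
    f = np.exp(-0.5 * ((y - mean) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    integral = (2 * np.pi * sigma**2) ** (-beta / 2) / np.sqrt(1 + beta)
    return -(1.0 / beta) * f**beta + integral / (1 + beta)

def beta_bootstrap_filter(ys, n_particles=500, sigma_x=0.1, sigma_y=0.5,
                          beta=0.2, seed=0):
    """Bootstrap particle filter with generalised (beta-loss) weights.

    Identical to the standard filter except that log-weights are the negated
    beta-loss rather than the log-likelihood. Returns filtered posterior means.
    """
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)  # prior draw
    means = []
    for y in ys:
        # Propagate under the random-walk transition (assumed toy model).
        particles = particles + rng.normal(0.0, sigma_x, n_particles)
        # Generalised weight update: w_i propto exp(-loss_beta(y | x_i)).
        logw = -beta_loss_gaussian(y, particles, sigma_y, beta)
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * particles))
        # Multinomial resampling.
        particles = particles[rng.choice(n_particles, n_particles, p=w)]
    return np.array(means)
```

The robustness mechanism is visible directly in the update: for an outlying observation, $f(y\mid x)^\beta$ is negligible for every particle, so the weights flatten toward uniform and the outlier is effectively ignored, whereas the standard log-likelihood weight would be dominated by it.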
Review for NeurIPS paper: Generalised Bayesian Filtering via Sequential Monte Carlo
Weaknesses: - The authors choose to select $\beta$ based on predictive accuracy. This is sensible, but what other approaches could also be used? And does it make sense to consider predictive accuracy on a separate training dataset? In the SMC community, people usually care more about the efficiency with which the likelihood function can be estimated (in order to estimate parameters with particle MCMC), the accuracy of the filtered distribution, or the ESS. Predictive accuracy is not usually a primary criterion, so does it make sense to select $\beta$ with this metric? From the simulation study, selecting by predictive accuracy appears to work well, but also seems to be consistently suboptimal.
Review for NeurIPS paper: Generalised Bayesian Filtering via Sequential Monte Carlo
Reviewers agree that this is a clear contribution to the HMM and SMC toolkit, allowing for robustness in the face of likelihood misspecification. Additionally, the method is both theoretically and empirically justified. While the main reviewer concern is novelty, there is agreement that the work is correct, thorough, and effective. An additional concern is the lack of clear attribution of theoretical results (e.g.